The proliferation of unmanned aerial vehicles (UAVs), commonly known as drones, poses several threats to public privacy, critical infrastructure, and cyber security. Detecting unauthorized drones is therefore a significant problem that has received considerable attention in recent years. In this paper, we present our experimental work on three drone detection methods (acoustic, radio frequency (RF), and visual detection) to evaluate their efficacy in both indoor and outdoor environments. Owing to the limitations of these schemes, we present a novel encryption-based drone detection scheme that uses a two-stage verification of the drone's received signal strength indicator (RSSI) and an encryption key generated from the drone's position coordinates to reliably detect an unauthorized drone in the presence of authorized drones.
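The two-stage idea can be sketched as follows. This is an illustrative toy, not the paper's exact protocol: the RSSI window, the key-derivation scheme, and all names are hypothetical assumptions.

```python
# Stage 1 gates on received signal strength (RSSI); stage 2 verifies a key
# derived from the drone's reported position coordinates against a shared secret.
import hmac
import hashlib

SHARED_SECRET = b"ground-station-secret"   # provisioned only to authorized drones
RSSI_MIN, RSSI_MAX = -90, -30              # plausible dBm window (assumed)

def position_key(lat: float, lon: float, alt: float) -> bytes:
    """Derive a key from position coordinates via HMAC over a canonical string."""
    msg = f"{lat:.5f},{lon:.5f},{alt:.1f}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest()

def is_authorized(rssi_dbm: int, reported_pos: tuple, presented_key: bytes) -> bool:
    # Stage 1: RSSI plausibility check.
    if not (RSSI_MIN <= rssi_dbm <= RSSI_MAX):
        return False
    # Stage 2: presented key must match the one derived from the reported position.
    expected = position_key(*reported_pos)
    return hmac.compare_digest(expected, presented_key)

pos = (33.64212, 72.99023, 120.0)
good_key = position_key(*pos)
print(is_authorized(-55, pos, good_key))       # authorized drone passes both stages
print(is_authorized(-55, pos, b"\x00" * 32))   # wrong key fails stage 2
print(is_authorized(-10, pos, good_key))       # implausible RSSI fails stage 1
```

A drone that passes the RSSI gate but cannot produce the position-bound key is flagged as unauthorized.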
Quantitative cephalometric analysis is the most widely used clinical and research tool in modern orthodontics. Accurate localization of cephalometric landmarks enables the quantification and classification of anatomical abnormalities; however, manually marking these landmarks is a very tedious job. Endeavours have constantly been made to develop automated cephalometric landmark detection systems, but they remain inadequate for orthodontic applications. The fundamental reason for this is that both the number of publicly available datasets and the number of training images they provide are insufficient for an AI model to perform well. To facilitate the development of robust AI solutions for morphometric analysis, we organise the CEPHA29 Automatic Cephalometric Landmark Detection Challenge in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2023). In this context, we provide the largest known publicly available dataset, consisting of 1000 cephalometric X-ray images. We hope that our challenge will not only drive forward research and innovation in automatic cephalometric landmark identification but will also signal the beginning of a new era in the discipline.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Low-rank and sparse decomposition based methods find their use in many applications involving background modeling, such as clutter suppression and object tracking. While Robust Principal Component Analysis (RPCA) has achieved great success in performing this task, it can take hundreds of iterations to converge, and its performance degrades in the presence of phenomena such as occlusion, jitter, and fast motion. Recently proposed deep unfolded networks, on the other hand, have demonstrated better accuracy and improved convergence over both their iterative counterparts and other neural network architectures. In this work, we propose a novel deep unfolded spatiotemporal RPCA (DUST-RPCA) network, which explicitly takes advantage of the spatial and temporal continuity in the low-rank component. Our experimental results on the moving MNIST dataset indicate that DUST-RPCA gives better accuracy when compared with existing state-of-the-art deep unfolded RPCA networks.
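For context, the classical iterative baseline that unfolded networks such as DUST-RPCA accelerate can be sketched with the inexact augmented Lagrange multiplier (IALM) method. Hyperparameters follow common defaults; this is an illustration of the decomposition, not the paper's network.

```python
# Decompose D into a low-rank part L (singular-value thresholding) and a
# sparse part S (elementwise soft-thresholding), enforcing D = L + S.
import numpy as np

def shrink(x, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(D, n_iter=100, rho=1.5):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    mu_bar = mu * 1e7                       # cap on the penalty parameter
    Y = np.zeros_like(D)                    # Lagrange multiplier
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update: shrink the singular values.
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # Sparse update: shrink the residual entries.
        S = shrink(D - L + Y / mu, lam / mu)
        # Dual ascent on the constraint D = L + S.
        Y = Y + mu * (D - L - S)
        mu = min(mu * rho, mu_bar)
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
sparse = np.zeros((50, 50))
mask = rng.random((50, 50)) < 0.05
sparse[mask] = rng.standard_normal(mask.sum()) * 10
D = low_rank + sparse
L, S = rpca_ialm(D)
```

Each iteration maps naturally to one layer of an unfolded network, with the thresholds learned rather than hand-set.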
Historically, content has been the primary lens used to study the language of online communities. This paper instead focuses on a community's linguistic style. While we know that individuals have distinguishable styles, here we ask whether communities have distinguishable styles as well. Moreover, whereas prior work has relied on narrow definitions of style, we adopt a broad definition involving 262 features to analyze the linguistic style of 9 online communities from 3 social media platforms discussing politics, television, and travel. We find that communities indeed have distinct styles. Furthermore, style is an excellent predictor of group membership (F-score 0.952 and accuracy 96.09%). While on average it is statistically equivalent to predictions using content alone, it is more resilient to reductions in training data.
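The prediction setup can be illustrated with a much smaller feature set. The four features and the toy posts below are ours, a crude stand-in for the paper's 262 stylistic features, and the nearest-centroid classifier is only one simple choice of predictor.

```python
# Style-based community prediction: hand-crafted stylistic features
# (no topic words) feed a nearest-centroid classifier.
import numpy as np

FUNCTION_WORDS = {"the", "a", "of", "and", "i", "to", "in"}

def style_features(text: str) -> np.ndarray:
    words = text.split()
    n_chars = max(len(text), 1)
    n_words = max(len(words), 1)
    return np.array([
        sum(len(w) for w in words) / n_words,                  # avg word length
        sum(c in ".,!?-" for c in text) / n_chars,             # punctuation rate
        sum(c.isupper() for c in text) / n_chars,              # uppercase rate
        sum(w.lower().strip(".,!?") in FUNCTION_WORDS          # function-word rate
            for w in words) / n_words,
    ])

posts = [
    "The committee vote was, frankly, a farce. Shameful.",
    "I read the full transcript of the hearing. Troubling.",
    "omg that finale?? i was not ready lol",
    "no spoilers pls but that episode broke me",
    "Packed one carry-on for two weeks in Portugal.",
    "Night train to Prague, window seat, always.",
]
labels = ["politics", "politics", "tv", "tv", "travel", "travel"]

X = np.stack([style_features(p) for p in posts])
centroids = {c: X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in sorted(set(labels))}

def predict(text: str) -> str:
    f = style_features(text)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(predict("ngl that cliffhanger wrecked me lol"))
```

The point of the design is that none of the features look at *what* is discussed, only *how* it is written.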
It is recognized, through numerous studies in psychology, neuroscience, and sensorial linguistics, that sensory perception and language are interconnected. Against this rich background, we ask whether the use of sensory language in writing is a part of linguistic style. This question is important from the perspective of stylometrics research, where a rich set of linguistic features has been explored, yet insufficient attention has been paid to features related to sensory language. With this goal, we explore several angles on sensory language and style in collections of lyrics, novels, and poetry. For example, we find that an individual's use of sensory language is not a random phenomenon; choice is likely involved. Likewise, sensorial style is generally stable over time, with only very small shifts. Moreover, style can be extracted from just a few hundred sentences containing sensory terms. We also identify representative and distinctive features within each genre. For example, we observe that 4 of the top 6 representative features of the novel collection involve individuals using olfactory language where we would expect them to use non-olfactory language.
Digital images contain a large amount of redundancy, so compression is applied to reduce image size without a noticeable loss of image quality. The same becomes more prominent for video, which comprises a sequence of images, where higher compression ratios are used over low-throughput networks. Assessing image quality in such scenarios is of particular interest. Subjective evaluation is infeasible in most scenarios, so objective evaluation is preferred. Of the three classes of objective quality measures, full-reference and reduced-reference methods require some form of the original image to compute a quality score, which is not feasible in scenarios such as broadcast or IP video. Therefore, a no-reference quality metric is proposed that assesses the quality of digital images by computing luminance and multiscale gradient statistics, along with mean-subtracted contrast-normalized (MSCN) products, as features for a feedforward neural network trained with scaled conjugate gradient. The trained network provides good regression and R² measures, and further testing on the LIVE Image Quality Assessment Database Release 2 has shown promising results. Pearson, Kendall, and Spearman correlations are computed between the predicted and actual quality scores, and the results are comparable to state-of-the-art systems. Moreover, the proposed metric computes faster than its counterparts and can be used for the quality assessment of image sequences.
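The MSCN transform at the core of such features can be sketched as below. The kernel size and normalization constant are common defaults assumed for illustration; the abstract's full feature set (luminance and multiscale gradient statistics) is not reproduced here.

```python
# Mean-subtracted contrast-normalized (MSCN) coefficients: each pixel is
# normalized by a local Gaussian-weighted mean and standard deviation.
import numpy as np

def gaussian_blur(img, sigma=7 / 6, radius=3):
    """Separable Gaussian filtering implemented with 1-D convolutions."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def mscn(img, C=1.0):
    mu = gaussian_blur(img)                                  # local mean
    sigma = np.sqrt(np.abs(gaussian_blur(img**2) - mu**2))   # local std
    return (img - mu) / (sigma + C)                          # C avoids div-by-zero

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (64, 64))   # stand-in for a grayscale image
coeffs = mscn(img)
```

For natural images these coefficients are approximately zero-mean and unit-variance, and distortions perturb their statistics, which is what makes them useful as no-reference quality features.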
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modal representations according to the semantic information of a person while ignoring background information. This work introduces a novel convolutional neural network (CNN)-based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named the AXM-block, is a unified multi-layer network that dynamically exploits multi-scale knowledge and recalibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to capture long-range dependencies. Additionally, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework has the novel ability to implicitly learn aligned semantics between modalities during the feature-learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on the CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms its competitors by more than 10% on the CrossRe-ID and CUHK-SYSU datasets.
Diabetic Retinopathy (DR) is considered one of the primary concerns due to its effect on vision loss among most people with diabetes globally. The severity of DR is mostly assessed manually by ophthalmologists from fundus photography-based retina images. This paper deals with an automated understanding of the severity stages of DR. In the literature, researchers have approached this automation using traditional machine learning-based algorithms and convolutional architectures. However, past works have rarely focused on the essential regions of the retinal image that could improve model performance. In this paper, we adopt transformer-based learning models to capture the crucial features of retinal images to understand DR severity better. We work with ensembles of image transformers, adopting four models, namely ViT (Vision Transformer), BEiT (Bidirectional Encoder representation for image Transformer), CaiT (Class-Attention in Image Transformers), and DeiT (Data efficient image Transformers), to infer the degree of DR severity from fundus photographs. For experiments, we used the publicly available APTOS-2019 blindness detection dataset, where the performance of the transformer-based models was quite encouraging.
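One common way to combine such models is to average their per-class probabilities and take the argmax, which can be sketched as follows. The logits here are synthetic stand-ins; the actual transformer models are not loaded, and whether the paper uses probability averaging or another fusion rule is not stated in the abstract.

```python
# Probability-averaging ensemble over several classifiers.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_per_model):
    """logits_per_model: (n_models, n_samples, n_classes) -> predicted class ids."""
    probs = softmax(np.asarray(logits_per_model))
    return probs.mean(axis=0).argmax(axis=-1)

# Three hypothetical models score one fundus image over 5 DR severity grades.
logits = np.array([
    [[0.1, 2.0, 0.3, 0.2, 0.1]],   # "ViT"  favors grade 1
    [[0.2, 1.8, 1.9, 0.1, 0.0]],   # "BEiT" narrowly favors grade 2
    [[0.0, 2.2, 0.5, 0.3, 0.1]],   # "CaiT" favors grade 1
])
print(ensemble_predict(logits))    # pooled evidence selects grade 1
```

Averaging probabilities rather than hard votes lets a confident model outweigh a marginal disagreement, as in the second model above.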
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, the functional variations of the force field of dynamical system instances, without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level capturing the common force field form among the studied dynamical system instances and an inner level adapting to individual system instances. A priori physical knowledge, such as a conservative force field or Euclidean symmetry, can be conveniently embedded in the neural network architecture as inductive bias. With the learned meta-knowledge, iMODE can model an unseen system within seconds, inversely reveal knowledge about the physical parameters of a system, or act as a Neural Gauge to "measure" the physical parameters of an unseen system from observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.